These notes are based on Chapter 12 of JMR — Jones, O., Maillardet, R., & Robinson, A. (2009). Introduction to scientific programming and simulation using R. CRC Press.

Introduction

Optimisation concerns the search for the maximum (or minimum) of a function. An optimisation problem can be expressed quite compactly as follows: \[ \max_{x\in\mathcal{D}} f(x) \quad\text{or}\quad \underset{x\in\mathcal{D}}{\operatorname{argmax}}f(x),\] where \(\mathcal{D}\) is a subset of the function’s domain over which we try to find a maximiser. The former is the (global) maximum, while the latter is the set of (global) maximisers. In other words, a maximiser \(x^*\in\operatorname{argmax}f(x)\) satisfies \[ f(x)\leq f(x^*),\; \forall x\in\mathcal{D}. \]

In these notes, we discuss only maximisation problems because any minimisation problem can be readily transformed into a maximisation problem as follows: \[ \underset{x\in\mathcal{D}}{\operatorname{argmin}} f(x) = \underset{x\in\mathcal{D}}{\operatorname{argmax}} -f(x).\]
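
As a quick sanity check, base R’s optimize() illustrates this identity; the quadratic and the interval below are arbitrary choices for illustration.

# minimising (x-2)^2 over [0, 5] ...
optimize(function(x) (x-2)^2, c(0, 5))$minimum
# ... locates (approximately) the same point as maximising its negative
optimize(function(x) -(x-2)^2, c(0, 5), maximum=TRUE)$maximum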

Global maximisers are generally difficult to find because our knowledge of \(f\) is only local. In other words, we can evaluate \(f\) at only a finite number of points and obtain a partial picture of the overall shape of \(y=f(x)\). In one-dimensional cases \((\mathcal{D}\subset\mathbb{R})\), partial pictures may be sufficient for locating global maxima. For example, it is easy to evaluate \(f(x)=\sin(3\pi x)/x+x^3-5x^2+6x\) at 301 points and obtain the following graph.

curve(sin(3*pi*x)/x+x^3-5*x^2+6*x, from=.26, to=3.6, ylab="y", n=301)
title(expression(y==sin(3*pi*x)/x+x^3-5*x^2+6*x))

However, modern optimisation problems often involve complex objective functions defined on high-dimensional spaces (e.g., deep neural networks). In these cases, exhaustive search is infeasible and we have to be content with local maxima. Recall that a point \(x^*\) is a local maximiser if there exists a neighbourhood \(\mathcal{N}\subset\mathcal{D}\) of \(x^*\) such that \(f(x)\leq f(x^*)\) for all \(x\in\mathcal{N}\).

That said, there are various techniques for global optimisation, one of which is Bayesian optimisation.

Testbeds

In these notes, we use two maximisation problems to test our algorithms. The first is \[ \max_{x\in\mathcal{D}} f_1(x) = \frac{\sin(3\pi x)}{x}+x^3-5x^2+6x ,\] where \(\mathcal{D} = [0.2, 3.4]\).

Here is an R function that returns \(f_1\), \(f_1'\), and \(f_1''\).

test1 <- function(x) {
  f <- sin(3*pi*x)/x + x^3 - 5*x^2 + 6*x                              # f_1
  grad <- (3*pi*x*cos(3*pi*x) - sin(3*pi*x))/x^2 + 3*x^2 - 10*x + 6   # f_1'
  # f_1''; the trailing "+" lets the expression continue onto the next line
  H <- -(9*pi^2*x^2*sin(3*pi*x) + 2*(3*pi*x*cos(3*pi*x) - sin(3*pi*x)))/x^3 +
         6*x - 10

  return(c(f, grad, H))
}
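
For example, calling it at a point returns a length-3 vector \((f_1, f_1', f_1'')\):

test1(1)  # c(f, grad, H) evaluated at x = 1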

The second is \[ \max_{(x,y)\in\mathcal{D}} f_2(x,y) =\sin\left(\frac{x^2}{2} - \frac{y^2}{4}\right)\cos\left(2x-e^y\right) ,\] where \(\mathcal{D} = [-0.5, 3]\times[-0.5, 2]\). The following code produces a surface plot.

# c.f. https://stackoverflow.com/a/45424943
library(plotly)  # for plot_ly() and the %>% pipe
library(interp)  # provides interp(); the akima package offers the same function
X <- seq(-.5, 3, length=100)
Y <- seq(-.5, 2, length=100)
XY <- expand.grid(X=X, Y=Y) # create a grid
f2 <- function(x, y){
  sin(x^2/2 - y^2/4)*cos(2*x-exp(y))
}
Z <- f2(XY$X, XY$Y)
s <- interp(x=XY$X, y=XY$Y, z=Z)
# https://stackoverflow.com/a/61118759 for z=t(s$z)
plot_ly(x=s$x, y=s$y, z=t(s$z)) %>% add_surface()

Here is an R function that returns \(f_2\), \(\nabla f_2\), and the Hessian of \(f_2\).

test2 <- function(xx) {
  x <- xx[1]
  y <- xx[2]
  f <- sin(x^2/2-y^2/4)*cos(2*x-exp(y))
  f1 <- x*cos(2*x-exp(y))*cos(x^2/2-y^2/4) -
          2*sin(2*x-exp(y))*sin(x^2/2-y^2/4)
  f2 <- -(y*cos(2*x-exp(y))*cos(x^2/2-y^2/4))/2 +
          exp(y)*sin(2*x-exp(y))*sin(x^2/2-y^2/4)
  f11 <- cos(exp(y)-2*x)*cos(x^2/2-y^2/4) -
           x^2*sin(x^2/2-y^2/4)*cos(exp(y)-2*x) +
           4*x*sin(exp(y)-2*x)*cos(x^2/2-y^2/4) -
           4*sin(x^2/2-y^2/4)*cos(exp(y)-2*x)
  f12 <- -x*exp(y)*sin(exp(y)-2*x)*cos(x^2/2-y^2/4) -
           y*sin(exp(y)-2*x)*cos(x^2/2-y^2/4) +
           2*exp(y)*sin(x^2/2-y^2/4)*cos(exp(y)-2*x) +
           x*y*sin(x^2/2-y^2/4)*cos(exp(y)-2*x)/2
  f22 <- -exp(y)*sin(exp(y)-2*x)*sin(x^2/2-y^2/4) -
           cos(exp(y)-2*x)*cos(x^2/2-y^2/4)/2 -
           y^2*sin(x^2/2-y^2/4)*cos(exp(y)-2*x)/4 +
           exp(y)*y*sin(exp(y)-2*x)*cos(x^2/2-y^2/4) -
           exp(2*y)*sin(x^2/2-y^2/4)*cos(exp(y)-2*x)

  return(list(f, c(f1,f2), matrix(c(f11,f12,f12,f22),2,2)))
}

This is, to say the least, nasty. You may use R’s deriv() function to compute the gradient and Hessian instead.
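
For instance, here is a minimal sketch using deriv() from the stats package; it returns a function whose value carries the gradient (and Hessian) as attributes.

d2 <- deriv(expression(sin(x^2/2 - y^2/4)*cos(2*x - exp(y))),
            namevec = c("x", "y"), function.arg = c("x", "y"),
            hessian = TRUE)
out <- d2(1.5, .4)
attr(out, "gradient")  # 1 x 2 matrix: the gradient at (1.5, .4)
attr(out, "hessian")   # 1 x 2 x 2 array: the Hessian at (1.5, .4)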

Golden-section method

The golden-section method is based on the same “bracketing” idea used in the bisection method for root finding and does not require derivative information. Here, we successively isolate smaller intervals (brackets) that contain a local maximum.

Whereas root finding brackets a root with two points, to isolate a local maximum we use three points: \(x_l\), \(x_m\), and \(x_r\) with \(x_m\in(x_l,x_r)\). A key observation is:

\(f(x_l) \leq f(x_m)\) and \(f(x_r) \leq f(x_m)\) imply the existence of a local maximum in the interval \([x_l,x_r]\).

Similar to the bisection method, the golden-section method starts with some \(x_m\in(x_l,x_r)\) such that \(f(x_l) \leq f(x_m)\) and \(f(x_r) \leq f(x_m)\).

Algorithm

Below is the algorithm, where \(f(x_m)\) is our estimate of the local maximum and is updated until the stopping criterion is met.

  1. Take the larger of the two subintervals \((x_l,x_m)\) and \((x_m,x_r)\) and denote it by \(I\).
  2. Pick a point \(y\in I\) using the golden ratio (explained below).
  3. Do one of the following:
    • If \(f(y)\geq f(x_m)\), update the estimate with \(f(y)\) (so \(y\) becomes the new \(x_m\)) and shrink the interval \((x_l,x_r)\) to \(I\).
    • Otherwise, shrink the interval \((x_l,x_r)\) to \((x_l,y)\) when \(x_r\in I\), or to \((y,x_r)\) when \(x_l\in I\).
  4. Halt if the length of the updated interval is smaller than \(\epsilon\); otherwise go back to Step 1.

The idea behind Step 3 is that we want to shrink the bracket while keeping the current estimate of the maximiser inside it. If \(f(y)<f(x_m)\), we cannot update the estimate but still want to shrink the bracket.

Golden ratio

At Step 2 above, the golden ratio arises from the desire to choose \(y\) so that the ratio of the length of the larger subinterval to that of the smaller one stays the same from one iteration to the next. As an example, suppose we find ourselves in the following situation.

curve(-x^2, xaxt="n", yaxt="n", ann=FALSE, from=-1, to=1)
letters <- c(expression(x[l]),expression(x[m]),"y",expression(x[r]))
from_x <- c(-.9,-.1,.25,.9)
axis(1, from_x, labels=letters)
from_y <- rep(-1.2, 4)
to_x <- from_x
to_y <- sapply(from_x, function(x) return(-x^2))
segments(from_x, from_y, to_x, to_y, lty=3)
arrows(from_x[1], -.9, to_x[2], -.9, length=.1, code=3)
text((to_x[1]+to_x[2])/2, -.9, "a", pos=3)
arrows(from_x[2], -.9, to_x[4], -.9, length=.1, code=3)
text((to_x[2]+to_x[4])/2, -.9, "b", pos=3)
arrows(from_x[2], -.6, to_x[3], -.6, length=.1, code=3)
text((to_x[2]+to_x[3])/2, -.6, "c", pos=3)
points(to_x[2:3], to_y[2:3])
text(to_x[2], to_y[2]+.01, expression(f(x[m])), pos=2)
text(to_x[3], to_y[3]+.01, expression(f(y)), pos=4)

As you can see, the right subinterval is larger, so \(I = (x_m,x_r)\). Since \(f(y)<f(x_m)\) here, the new bracket will be \((x_l,y)\), with subinterval lengths \(a\) and \(c\), so we will pick \(y\in I\) (equivalently, pick \(c\)) such that \(\frac{a}{c} = \frac{b}{a}\).

If, instead, the situation is as below, where we get a better estimate of the maximum because \(f(y)\geq f(x_m)\), the new bracket will be \((x_m,x_r)\), with subinterval lengths \(c\) and \(b-c\), so we will pick \(y\) (or \(c\)) such that \(\frac{b-c}{c} = \frac{b}{a}\).

curve(-x^2, xaxt="n", yaxt="n", ann=FALSE, from=-1, to=1)
letters <- c(expression(x[l]),expression(x[m]),"y",expression(x[r]))
from_x <- c(-.9,-.2,.1,.9)
axis(1, from_x, labels=letters)
from_y <- rep(-1.2, 4)
to_x <- from_x
to_y <- sapply(from_x, function(x) return(-x^2))
segments(from_x, from_y, to_x, to_y, lty=3)
arrows(from_x[1], -.9, to_x[2], -.9, length=.1, code=3)
text((to_x[1]+to_x[2])/2, -.9, "a", pos=3)
arrows(from_x[2], -.9, to_x[4], -.9, length=.1, code=3)
text((to_x[2]+to_x[4])/2, -.9, "b", pos=3)
arrows(from_x[2], -.6, to_x[3], -.6, length=.1, code=3)
text((to_x[2]+to_x[3])/2, -.6, "c", pos=3)
points(to_x[2:3], to_y[2:3])
text(to_x[2], to_y[2]+.01, expression(f(x[m])), pos=2)
text(to_x[3], to_y[3]+.01, expression(f(y)), pos=4)

So, even in the same \(I = (x_m,x_r)\) case, two situations can arise from our pick of \(y\), and we do not know which beforehand, because \(f(y)\) is unknown until we pick \(y\) and evaluate it. Our goal is therefore to pick \(y\) (or \(c\)) so that the desired ratio is preserved no matter which scenario arises.

Formally, pick \(c\) such that \[ \frac{a}{c} = \frac{b}{a} \quad \text{and} \quad \frac{b-c}{c} = \frac{b}{a} .\] Equating the two left-hand sides gives \(a = b-c\), or simply \[c = b - a.\]

Plugging this \(c\) into either equation, we get \[\begin{gather} \frac{b}{a} = \frac{a}{b-a}\\ \Leftrightarrow\quad \left(\frac{b}{a}\right)^2-\left(\frac{b}{a}\right)-1 = 0 \end{gather}\] The positive root is the golden ratio \[\frac{b}{a} = \frac{1+\sqrt{5}}{2}.\]

In short, given three points \(x_l\), \(x_m\), and \(x_r\) with the right subinterval larger, we choose \[ y = x_m + c = x_m + \frac{a^2}{b} = x_m + \frac{2(x_m-x_l)}{1+\sqrt{5}},\] where \(a = x_m-x_l\) and \(b = x_r-x_m\).
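
As a quick numerical check, here is an arbitrary bracket whose subintervals are already in the golden ratio; either way the bracket shrinks, the ratio stays equal to \((1+\sqrt{5})/2\).

gr <- (1 + sqrt(5))/2
x.l <- 0; x.m <- 1; x.r <- 1 + gr        # a = 1, b = gr
a <- x.m - x.l; b <- x.r - x.m
y <- x.m + 2*(x.m - x.l)/(1 + sqrt(5))
cc <- y - x.m                            # this is c in the text
a/cc         # ratio if the bracket shrinks to (x.l, y); equals gr
(b - cc)/cc  # ratio if the bracket shrinks to (x.m, x.r); also equals gr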

Try to see what will happen if the left subinterval is larger and \(I = (x_l,x_m)\).

Application

Let’s apply it to \(f(x)=\sin(3\pi x)/x+x^3-5x^2+6x\).

gsection <- function(f, x.l, x.r, x.m, tol=1e-8) {
  f.l <- f(x.l)[1]
  f.m <- f(x.m)[1]
  f.r <- f(x.r)[1]
  gr <- (1 + sqrt(5))/2

  n <- 0
  while ((x.r - x.l) > tol) {
    n <- n + 1
    if ((x.r - x.m) > (x.m - x.l)) {
      y <- x.m + (x.m - x.l)/gr
      f.y <- f(y)[1]
      if (f.y >= f.m) {
        x.l <- x.m
        f.l <- f.m
        x.m <- y
        f.m <- f.y
      } else {
        x.r <- y
        f.r <- f.y
      }
    } else {
      y <- x.m - (x.r - x.m)/gr
      f.y <- f(y)[1]
      if (f.y >= f.m) {
        x.r <- x.m
        f.r <- f.m
        x.m <- y
        f.m <- f.y
      } else {
        x.l <- y
        f.l <- f.y
      }
    }
  }
  
  return(list("f"=f(x.m)[1], "x.m"=x.m, "Number of steps"=n))
}
gsection(test1, 1.1, 2.1, 1.7)
## $f
## [1] 1.851551
## 
## $x.m
## [1] 1.455602
## 
## $`Number of steps`
## [1] 39

Try other triples and see which local maxima you find.
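
For example, judging from the plot, the bracket below should isolate a different local maximum (somewhere around \(x\approx0.8\)); the particular triple is just one plausible choice.

gsection(test1, 0.6, 1.0, 0.8)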

Gradient descent

Thanks to the popularity of machine learning, especially neural networks, gradient descent has been studied extensively as a class of optimisation methods.

Since we are doing maximisation, “gradient ascent” would be a more appropriate name. But the term “gradient descent” is so common that we stick with it.

Recall that the gradient \(\nabla f\) is the vector of partial derivatives of \(f\); it points in the direction of fastest increase of \(f\) (and its length gives the rate of increase).

When searching for a local maximum, it is almost natural, albeit naïve, to follow the direction of the gradient at each iteration. Hence the update rule \[ x_{n+1} = x_n + \alpha \nabla f(x_n).\]

Here, \(\alpha>0\) is a step size that determines how far we ascend in the direction of \(\nabla f(x_n)\).

Remember that, unless \(f\) is linear, \(\nabla f\) is a function of \(x\), so \(\nabla f(x_n)\) no longer gives the direction of steepest ascent once we move away from \(x_n\). As a rule of thumb, then, we should take steps as small as our computational budget allows.

Application

Let’s first apply it to the 1-D problem.

gd1 <- function(fg, x0, alpha=1e-2, tol=1e-8, n.max=1000) {
  x <- x0
  fg.x <- fg(x)
  n <- 0
  while ((abs(alpha*fg.x[2]) > tol) & (n < n.max)) {
    x <- x + alpha*fg.x[2]
    fg.x <- fg(x)
    n <- n + 1
  }
  return(list("f"=fg(x)[1], "x"=x, "Number of steps"=n))
}

Note the stopping criteria. Besides the maximum number of iterations, the function uses \(|\alpha f'(x)|\leq\epsilon\) because that is the actual step about to be taken and \(f'(x^*)=0\) at a local maximum.

Try other stopping criteria such as \(|f(x_{n+1})-f(x_n)|\leq\epsilon\) and \(\|x_{n+1}-x_n\|\leq\epsilon\).

gd1(test1, 1.3)
## $f
## [1] 1.851551
## 
## $x
## [1] 1.455602
## 
## $`Number of steps`
## [1] 19
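
As a starting point for the stopping-criteria exercise above, here is one possible variant (a sketch, not part of JMR) that stops when successive function values are close:

gd1.alt <- function(fg, x0, alpha=1e-2, tol=1e-8, n.max=1000) {
  x <- x0
  f.old <- fg(x)[1]
  n <- 0
  repeat {
    x <- x + alpha*fg(x)[2]   # same ascent step as gd1
    f.new <- fg(x)[1]
    n <- n + 1
    if (abs(f.new - f.old) <= tol || n >= n.max) break
    f.old <- f.new
  }
  return(list("f"=f.new, "x"=x, "Number of steps"=n))
}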

Identify all local maxima by changing initial points. You can eyeball the graph and determine points from which to hill-climb. What will happen if you start at \(x\leq0.4\) or \(x\geq3.1\)?
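
For example, a quick sweep over a few starting points eyeballed from the plot (output not shown):

for (x0 in c(0.5, 0.8, 1.3, 2.0, 2.7)) print(gd1(test1, x0))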

Next, to the 2-D problem.

gd2 <- function(fg, x0, alpha=1e-2, tol=1e-7, n.max=1e4) {
  x <- x0
  grad <- fg(x)[[2]]
  
  n <- 0
  while ((max(abs(alpha*grad)) > tol) & (n < n.max)) {
    x <- x + alpha*grad
    grad <- fg(x)[[2]]
    n <- n + 1
  }

  if (n < n.max) {
    return(list("f"=fg(x)[[1]], "x"=x, "Number of steps"=n))
  } else {
    return(NULL)
  }
}
gd2(test2, c(1.5, .4))
## $f
## [1] 1
## 
## $x
## [1] 2.030692 1.401523
## 
## $`Number of steps`
## [1] 724

Play with different \(\alpha\) and \(x_0\).
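
For instance (the step size and starting points below are arbitrary; output not shown, and some settings may fail to converge, in which case gd2 returns NULL):

gd2(test2, c(1.5, .4), alpha=5e-3)
gd2(test2, c(.5, .5))
gd2(test2, c(1, 0))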

Newton’s method

In 1-D problems, Newton’s method is simply to apply the Newton-Raphson method to find a root of \(f'\). This makes sense because \(f'(x^*)=0\) is a necessary condition for a local maximum (with a catch of being merely “necessary”). Therefore, assuming that \(f\) is twice continuously differentiable, we have: \[ x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)} .\]

For some reason, when the Newton-Raphson method is applied to optimisation, it is just called Newton’s method. Poor Joseph Raphson!

To generalise it to multi-dimensional problems, let’s first recall the key idea of the Newton-Raphson method. When iteratively searching for a root of \(g\), the method uses a linear approximation of \(g\) at the current guess \(x_n\), since it is easy to find the root of a linear function. The root of the approximation then becomes our next guess \(x_{n+1}\).

Now, imagine a vector-valued function with \(k\) inputs and \(k\) outputs: \(g:\mathbb{R}^k\to\mathbb{R}^k\). In this case, the equivalents of the root and the derivative are, respectively, the zero vector and the Jacobian matrix \(J\). Doing the same approximation at \(x_n\in\mathbb{R}^k\), we get: \[ g(x) \approx g(x_n) + J(x_n)(x-x_n) .\] Setting the left-hand side equal to \(0\) and rearranging, we derive: \[ x_{n+1} = x_n - J(x_n)^{-1}g(x_n) .\]

We implicitly assume \(g\) is continuously differentiable and that the Jacobian \(J(x_n)\) is invertible (has a non-zero determinant).
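
Here is a minimal sketch of this root-finding iteration; gJ is a hypothetical user-supplied function returning a list with the vector \(g(x)\) and the Jacobian \(J(x)\).

newton.raphson <- function(gJ, x0, tol=1e-8, n.max=100) {
  x <- x0
  for (n in 1:n.max) {
    out <- gJ(x)                        # out[[1]] = g(x), out[[2]] = J(x)
    if (max(abs(out[[1]])) <= tol) break
    x <- x - solve(out[[2]], out[[1]])  # Newton-Raphson step
  }
  x  # returned as-is after n.max steps; no convergence check in this sketch
}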

Based on this, if we assume the objective function \(f\) is twice continuously differentiable, then we have multi-dimensional Newton’s method: \[ x_{n+1} = x_n - H(x_n)^{-1}\nabla f(x_n) ,\] where \(H\) is the Hessian matrix of \(f\).

To see the correspondence, remember that for \(f:\mathbb{R}^k\to\mathbb{R}\), \(\nabla f\) is a vector-valued function and its Jacobian is the Hessian of \(f\).

Another derivation is:

  1. Start with the 2nd-order Taylor approximation \[f(x_{n+1}) \approx f(x_n) + \nabla f(x_n)^T(x_{n+1}-x_n) + \frac{1}{2}(x_{n+1}-x_n)^TH(x_n)(x_{n+1}-x_n)\]
  2. Differentiate both sides with respect to \(x_{n+1}\) \[\nabla f(x_{n+1}) \approx \nabla f(x_n) + H(x_n)(x_{n+1}-x_n)\]
  3. Set the gradient equal to \(0\) and rearrange. \[x_{n+1} = x_n - H(x_n)^{-1}\nabla f(x_n)\]

Application

Let’s apply Newton’s method to the 2-D testbed.

newton <- function(fgh, x0, tol=1e-8, n.max = 100) {
  x <- x0
  grad <- fgh(x)[[2]]
  H <- fgh(x)[[3]]

  n <- 0
  while ((max(abs(grad))>tol) & (n < n.max)) {
    x <- x - solve(H, grad)
    grad <- fgh(x)[[2]]
    H <- fgh(x)[[3]]
    n <- n + 1
  }
  
  if (n < n.max) {
    return(list("f"=fgh(x)[[1]], "x"=x, "Number of steps"=n))
  } else {
    return(NULL)
  }
}
newton(test2, c(1.5, .6))
## $f
## [1] 1.253638e-20
## 
## $x
## [1] 9.899083e-10 1.366392e-09
## 
## $`Number of steps`
## [1] 7

If you experiment with various \(x_0\) on this test function, you will find that Newton’s method is extremely sensitive to the choice of \(x_0\) and not very reliable.

Since the method is based on root finding of \(\nabla f\), it may converge to wherever the gradient vanishes: minima, maxima or saddle points.
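
For example (the starting points below are arbitrary; some runs may return NULL, land on a minimum or saddle point rather than a maximum, or even stop with an error if the Hessian is singular):

for (x0 in list(c(1.5, .4), c(1.5, .6), c(.5, .5))) print(newton(test2, x0))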

When Newton’s method works, its convergence is quick, often using only a fraction of the steps taken by the methods above (compare 7 steps here with 724 for gradient descent). For example, it works for \(f(x,y)=x(x-1)(x+1) - y(y-1)(y+1)\), plotted below. The price for speed is the requirement of second derivatives, which may be infeasible to compute in complex problems.

X <- seq(-1, 1, length=100)
Y <- seq(-1, 1, length=100)
XY <- expand.grid(X=X, Y=Y) # create a grid
f2 <- function(x, y){       # overwrites the earlier f2 for this plot
  x*(x-1)*(x+1) - y*(y-1)*(y+1)
}
Z <- f2(XY$X, XY$Y)
s <- interp(x=XY$X, y=XY$Y, z=Z)
plot_ly(x=s$x, y=s$y, z=t(s$z)) %>% add_surface()